A telerobotic virtual control system
Abstract
A project to develop a telerobotic "virtual control" capability, currently underway at the University of Toronto, is described. The project centres on a new mode of interactive telerobotic control based on combining computer-generated stereographic images with remotely transmitted stereoscopic video images. A virtual measurement technique, in conjunction with a basic level of digital image processing comprising zooming, parallax adjustment, edge enhancement and edge detection, has been developed to assist the human operator in visualising the remote environment and in spatial reasoning. The aim is to keep target recognition, tactical planning and high-level control functions in the hands of the human operator, with the computer performing low-level computation and control. Control commands initiated by the operator are implemented through manipulation of a virtual image of the robot system, merged with a live video image of the remote scene. This paper discusses the philosophy and objectives of the project, with emphasis on the underlying human factors considerations in the design, and reports the progress made to date.

1. BACKGROUND AND PROBLEM DOMAIN

Advancing technology is increasingly providing more powerful tools for transferring human perceptual and cognitive capabilities to remote or hazardous environments. With the objective of combining machine computational power with human intelligence in teleoperation, the Teleoperation and Control Laboratory at the University of Toronto is currently undertaking a project to develop a telerobotic "virtual control" capability. To clarify our concept of virtual control, as well as how this project differs from other related efforts reported in the literature, it is helpful to establish a taxonomy of telerobotic operating scenarios, together with a summary of the various levels of human operator control which are either currently possible or foreseen.
In this section we describe our task objectives in terms of our classification of teleoperation systems. A number of classification schemes for describing teleoperation have been proposed by other authors. One of these, set forth by Sheridan and his colleagues, categorises teleoperation systems according to degree of control automation [Sheridan, 1984]. On a spectrum ranging from minimal to maximal autonomy, teleoperation can be accomplished by means of:

(1) manual control without computer aiding;
(2) manual control with significant computer transforming or aiding;
(3) supervisory control with a major portion of control performed by the human operator;
(4) supervisory control with a major portion of control performed by computer;
(5) fully automatic control, where human operators primarily observe the process without influencing it.

The goal of our project is to elevate human-mediated teleoperation from level 1, at which telemanipulation typically takes place at present, to level 2 or 3, depending on the situation.

Proc. SPIE, Vol. 1612, Cooperative Intelligent Robotics in Space II, Boston, Nov. 11-13, 1991.

Teleoperation tasks can also be categorised according to the type of environment in which they take place. In our taxonomy, we classify tasks according to the degree of uncertainty about the task environment, the objects being manipulated, and the operations that need to be performed.

Case 1: Remote World Fully Modelled

In this category, the objects being manipulated, the operating environment and the operational procedures (the actions) are either repetitive or varying but predictable. In the former case, which corresponds to most current scenarios in industrial robotics, a programmed robot can handle the task with little human intervention.
In the latter case, the human operator is able to operate the remote robot by selecting predetermined routines or by combining known operational elements, which can be considered as part of a given vocabulary within a command language (e.g. "Move to Point A") [Sheridan, 1988].

Case 2: Remote World Partially Modelled

This category includes operations in man-made environments, such as nuclear power plants and space platforms, where all the potential operational procedures of the task cannot necessarily be anticipated in sufficient detail to enable predetermined routines to be created. In such environments, however, some geometrical knowledge of the environment and the objects within it can be modelled a priori, even though the spatial relationships between the objects and their environment are not known exactly. In other words, the remote world is partially known, or unstructured. Many graphical simulation or virtual reality techniques have been proposed to aid teleoperation in this case [e.g. Ince et al, 1991; Rocheleau et al, 1991; Browse et al, 1991; Spofford et al, 1991; Bejczy et al, 1990]. These techniques essentially assume the availability of geometrical knowledge of portions of the environment and of some of the objects within it, which otherwise could not be simulated. In such cases, support for the human operator can be provided by constructing a database of geometrical models of known portions of the task environment and presenting the operator with a completely graphical simulation of the remote worksite. Alternatively, wire-frame images of relevant objects in the environment can be introduced into a video image of the real worksite. With state-of-the-art graphics technology it is relatively straightforward to create such virtual images, but the difficulty of continuously coordinating virtual images with objects in the real task space remains.
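In modern notation, coordinating a wire-frame overlay with the live video amounts to projecting each 3-D model vertex into image coordinates through a camera model. The following is a minimal sketch, assuming an ideal pinhole camera; the focal length, image centre and cube geometry are illustrative values, not parameters from this project:

```python
import math

def project_point(x, y, z, focal_length=500.0, cx=320.0, cy=240.0):
    """Project a 3-D point (camera frame, metres, z pointing forward)
    to pixel coordinates with an ideal pinhole model:
    u = f*x/z + cx, v = f*y/z + cy."""
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = focal_length * x / z + cx
    v = focal_length * y / z + cy
    return u, v

# Project the corners of a hypothetical 1 m cube centred 4 m in front of
# the camera; drawing lines between the projected corner pairs would
# yield the wire-frame overlay on the video image.
cube = [(sx * 0.5, sy * 0.5, 4.0 + sz * 0.5)
        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
pixels = [project_point(x, y, z) for x, y, z in cube]
```

The registration problem noted above corresponds to keeping the camera parameters and the object pose in this projection consistent with the real scene as either one moves.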
A variety of means for measuring and updating estimates of the real world, such as laser scanners, laser pointers, sonar scanners and video imaging, can be used for this purpose [e.g. Takase, 1991; Christensen et al, 1991].

Case 3: Remote World Unknown

This category differs from the preceding one in that little or no geometrical knowledge about the remote environment and the objects within it is assumed to be available beforehand. The operational domains which most readily come to mind as examples of this case include underwater robotics, mining, nuclear waste cleanup and military robotics [e.g. Grodski et al, 1991]. From the literature it is evident that, beyond Sheridan's original display aiding concepts [Sheridan, 1984], relatively little effort to provide the human operator with more advanced assists has been targeted at this category. Although interactive modelling has been suggested for adding unknown objects on-line into the geometric database [e.g. Oxenberg et al, 1988; Burtnyk et al, 1990], the feasibility of having the human operator manually teach the database during actual teleoperation, when time constraints and environmental complexity can be important factors, is questionable. Work underway in our laboratory is aimed primarily at assisting operators with remote manipulation tasks in unknown environments (Case 3), by providing a toolbox of means, based primarily on stereoscopic video displays, superimposed stereoscopic computer graphics and machine vision, for interactively probing a remote unknown world, making quantitative measurements of relevant landmarks, object locations and required teleoperator trajectories, and communicating these to the computer for further, higher-level machine control.

2. OBJECTIVES OF THE PROJECT

It is a well-established principle of human factors engineering that a well-designed human-machine system should combine what humans are good at with what machines are good at. As technology advances, the areas at which machines are good are expanding, forcing designers of human-machine systems to take these trends into account. In the context of telemanipulation, technology is bringing improvements to both low-level perception (machine vision) and dexterous motion and sensor-based robotics [Burtnyk et al, 1990]. Machines (computers) are nevertheless still very poor at high-level perception and cognition, that is, object recognition, situation understanding, decision making and strategic task planning. Clearly, therefore, future teleoperation systems should exploit accurate and reliable low-level machine capabilities and leave human operators to concentrate on higher-level functioning. In other words, we aim to exploit machine power to off-load the human operator and thereby elevate her from continuous, purely manual control to higher-level cognitive and perceptual functions, such as pattern recognition and decision making. Figure 1 illustrates this partitioning of human and machine functions schematically. In the transition from traditional teleoperation interfaces (Fig. 1a) to interfaces for advanced teleoperation (Fig. 1b), the attempt to move the human-machine interface leftwards necessitates a concentrated research and development effort. The most challenging work lies in the area of bilateral human-machine communication. On the feedback pathway (the block labelled Sensing + Perception), work is needed to transform what the machine "sees" and present it in an appropriate form to the human operator, to facilitate visualisation of the remote worksite.
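The virtual measurement idea underlying this channel, recovering the 3-D position of a landmark the operator designates in a stereo pair, can be sketched as simple triangulation from disparity. This is a hedged illustration only; the focal length and baseline below are made-up calibration values, not figures from this project:

```python
def stereo_depth(x_left, x_right, focal_length_px, baseline_m):
    """Depth from horizontal disparity in a rectified stereo pair:
    Z = f * b / d, where d = x_left - x_right (pixels)."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_length_px * baseline_m / disparity

# A landmark seen at x = 412 px in the left image and x = 388 px in the
# right, with a 500 px focal length and a 12 cm baseline, lies 2.5 m away.
z = stereo_depth(412.0, 388.0, focal_length_px=500.0, baseline_m=0.12)
```

Measurements of this kind are what allow landmark positions and teleoperator trajectories to be communicated to the computer without a prior world model.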
On the forward pathway (the block labelled Instruction + Control), the human operator's understanding of the situation, the objects she recognises, and her plans for carrying out the task must all be transformed into actions that the robot can perform, by means of flexible interactive control concepts. The ultimate goal of our project is to develop a "virtual control" mode, which involves effort along both of the pathways illustrated in Fig. 1b. In this new mode, a video/graphics interface serves as both a perceptual aiding device and a control mediator. In the Sensing + Perception channel, tools for helping the human operator to visualise and measure the remote scene, as described in the next section, are being developed. In the Instruction + Control channel, a graphic pointer and/or a virtual robot are being employed for control mediation. That is, instead of steering the real telerobot directly, the human operator will be able to express her instructions and controls either by manipulating the robot end effector via the graphic pointer, by means of resolved control in three dimensions, or by manipulating an entire virtual robot and allowing the real robot to follow afterwards. In addition to its control function, the virtual robot superimposed on the real scene is therefore also a graphical tool for enhancing the human operator's perception of the real robot within the constraints of the real environment, for which an explicit model is not necessary (Case 3). To reflect environmental constraints, the virtual robot should be geometrically identical to the real robot. For the purpose of reflecting the human operator's commands and instructions, on the other hand, the virtual robot need not replicate the real robot's kinematics and dynamics. Concepts similar to our virtual control concept have been reported by other authors [e.g. Browse & Little, 1991; Bejczy et al, 1990; Conway et al, 1988].
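Resolved control of the end effector is classically implemented as resolved-rate control: the Cartesian velocity commanded through the pointer is mapped to joint velocities via the inverse of the manipulator Jacobian. The planar two-link sketch below illustrates the principle only; the link lengths, configuration and step size are illustrative, not parameters of the real or virtual robot in this project:

```python
import math

def jacobian_2link(t1, t2, l1=1.0, l2=1.0):
    """Jacobian of a planar 2-link arm: maps joint rates (rad/s)
    to end-effector velocity (m/s)."""
    j11 = -l1 * math.sin(t1) - l2 * math.sin(t1 + t2)
    j12 = -l2 * math.sin(t1 + t2)
    j21 = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    j22 = l2 * math.cos(t1 + t2)
    return ((j11, j12), (j21, j22))

def resolved_rate_step(t1, t2, vx, vy, dt=0.01):
    """One resolved-rate step: qdot = J^-1 * v (closed-form 2x2 inverse),
    then Euler-integrate the joint angles over dt."""
    (a, b), (c, d) = jacobian_2link(t1, t2)
    det = a * d - b * c
    if abs(det) < 1e-9:
        raise ValueError("near-singular configuration")
    qd1 = (d * vx - b * vy) / det
    qd2 = (-c * vx + a * vy) / det
    return t1 + qd1 * dt, t2 + qd2 * dt
```

Driving the virtual robot with steps of this kind, and streaming the resulting joint trajectory to the real robot once the operator accepts it, is one plausible realisation of "manipulating an entire virtual robot and allowing the real robot to follow afterwards".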
All of these virtual robot concepts are intended basically as tools for the human operator to explore feasible plans and express them to the robot control system. One potential application of such a control means, for instance, is the implementation of predictor displays for overcoming time delays in manual control loops. The principal characteristic that distinguishes our virtual control concept from the others, however, is that our virtual robot is constrained by the real environment even though the computer does not possess an explicit world model of that environment. If the operator's plan is infeasible and would result, for example, in a potential collision, the adverse consequences of the plan would be identified not just by human perception but also by means of a very basic software collision detection capability, which will operate interactively, in cooperation with the human operator.
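The paper does not specify how the basic collision detection is realised; one common minimal scheme is to bound the virtual robot's links and any operator-measured obstacles with spheres and flag every overlapping pair for the operator's attention. The sketch below assumes that scheme; all names, radii and positions are hypothetical:

```python
import math

def spheres_collide(c1, r1, c2, r2, clearance=0.05):
    """True if two bounding spheres (centre tuples in metres, radii in
    metres) come within the safety clearance of one another."""
    return math.dist(c1, c2) < r1 + r2 + clearance

def check_virtual_robot(link_spheres, obstacle_spheres):
    """Return (link_index, obstacle_index) pairs to flag interactively."""
    hits = []
    for i, (lc, lr) in enumerate(link_spheres):
        for j, (oc, orad) in enumerate(obstacle_spheres):
            if spheres_collide(lc, lr, oc, orad):
                hits.append((i, j))
    return hits

# A hypothetical elbow sphere passing 2 cm from a measured obstacle is
# flagged; a wrist sphere a metre away is not.
links = [((0.0, 0.0, 1.0), 0.10), ((0.0, 1.0, 1.0), 0.08)]
obstacles = [((0.0, 0.17, 1.0), 0.05)]
flags = check_virtual_robot(links, obstacles)
```

Because the obstacle spheres come from operator-designated measurements rather than an a priori model, a check of this kind remains usable in the Case 3 (unknown world) setting the project targets.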